Optimality condition and iterative thresholding algorithm for $$l_p$$-regularization problems

Authors
Abstract

Similar articles

$$L_{1/2}$$ Regularization: Convergence of Iterative Half Thresholding Algorithm

In recent studies on sparse modeling, nonconvex regularization approaches (particularly, Lq regularization with q ∈ (0, 1)) have been demonstrated to possess the capability of gaining much benefit in sparsity induction and efficiency. As compared with convex regularization approaches (say, L1 regularization), however, the convergence issue of the corresponding algorithms is more difficult t...
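To make the scheme concrete, here is a minimal Python sketch of iterative half thresholding, using the closed-form $$L_{1/2}$$ thresholding operator attributed to Xu et al. (2012); the function names, step-size handling and iteration count are illustrative assumptions rather than code from the cited paper, and the constants in the operator should be checked against the original source.

```python
import numpy as np

def half_threshold(t, lam):
    # Componentwise half-thresholding: a closed-form solution of
    # argmin_x (x - t)^2 + lam * |x|^{1/2}, following the formula of
    # Xu et al. (2012); treat the constants as an assumption to verify.
    t = np.asarray(t, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    abs_t = np.maximum(np.abs(t), 1e-12)          # guard against division by zero
    phi = np.arccos(np.clip((lam / 8.0) * (abs_t / 3.0) ** (-1.5), -1.0, 1.0))
    keep = (2.0 / 3.0) * t * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return np.where(np.abs(t) > thresh, keep, 0.0)

def iterative_half_thresholding(A, b, lam, mu, n_iter=500):
    # x^{k+1} = H_{lam*mu}( x^k + mu * A^T (b - A x^k) );
    # a common choice is 0 < mu < 1 / ||A||_2^2.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = half_threshold(x + mu * A.T @ (b - A @ x), lam * mu)
    return x
```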

Optimality condition and iterative thresholding algorithm for $$l_p$$-regularization problems

The article studies the $$l_p$$-regularized least squares problem $$\min_{s\in\mathbb{R}^{n}}\ \|As-b\|_2^{2}+\lambda\|s\|_p^{p}\quad (1)$$ where $$A\in\mathbb{R}^{m\times n}$$, $$b\in\mathbb{R}^{m}$$, $$\lambda\in(0,\infty)$$, $$\|s\|_p^{p}=\sum_{i=1}^{n}|s_i|^{p}$$ and $$p\in(0,1)$$. The problem (1) has broad applications in compressive sensing, variable selection problems and sparse least squares fitting for high dimensional data (see Chartrand and Staneva 2008; Fan and Li 2001; Foucart and Lai 2009; Frank and Friedman 1993; Ge et al. 2011; Huang et al. 2008; Knight and Wu 2000; Lai and Wang 2011; Natarajan ...
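As an illustration of the model and of a generic iterative thresholding scheme for it, the sketch below assumes (1) is the unconstrained problem written above and replaces the closed-form thresholding operator and optimality condition developed in the article with a crude numerical scalar minimization; all names are illustrative.

```python
import numpy as np

def prox_lp_scalar(t, tau, p, grid=2000):
    # Approximate argmin_x (x - t)^2 + tau * |x|^p for a single scalar t.
    # For t > 0 the minimizer lies in [0, t] (symmetrically for t < 0),
    # so a grid search over that interval is a valid, if crude, stand-in.
    if t == 0.0:
        return 0.0
    xs = np.linspace(0.0, t, grid)
    vals = (xs - t) ** 2 + tau * np.abs(xs) ** p
    return float(xs[np.argmin(vals)])

def iterative_thresholding_lp(A, b, lam, p, mu, n_iter=200):
    # Generic iterative thresholding for min_s ||A s - b||^2 + lam * ||s||_p^p:
    # a gradient step on the least-squares term followed by the componentwise
    # l_p proximal map with parameter lam * mu.
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = s + mu * A.T @ (b - A @ s)
        s = np.array([prox_lp_scalar(zi, lam * mu, p) for zi in z])
    return s
```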

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also kno...
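For reference, a minimal ISTA loop for the convex case $$\min_x \|Ax-b\|_2^2+\lambda\|x\|_1$$ is sketched below; the accelerated (FISTA) variant studied in the cited paper adds a momentum step on top of this, which is omitted here, and the constant step size $$1/L$$ is a standard choice rather than something taken from that paper.

```python
import numpy as np

def soft_threshold(t, tau):
    # Componentwise soft-thresholding, the proximal map of tau * ||.||_1.
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    # Basic ISTA for min_x ||A x - b||_2^2 + lam * ||x||_1 with constant step 1/L,
    # where L = 2 * ||A||_2^2 is the Lipschitz constant of the smooth gradient.
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```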

Optimality and Complexity for Constrained Optimization Problems with Nonconvex Regularization

In this paper, we consider a class of constrained optimization problems where the feasible set is a general closed convex set and the objective function has a nonsmooth, nonconvex regularizer. Such regularizers include the widely used SCAD, MCP, logistic, fraction, hard thresholding and non-Lipschitz Lp penalties as special cases. Using the theory of the generalized directional derivative and the t...

A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems

Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-S...
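The skeleton below shows where a non-convex thresholding map typically plugs into such iterations; it is a plain fixed-step proximal-gradient sketch with an $$l_0$$ hard-thresholding example, not the specific algorithm of the cited paper (which uses Barzilai-Borwein step sizes and a line search), and the names are illustrative.

```python
import numpy as np

def hard_threshold(t, tau):
    # Proximal map of tau * ||.||_0: keep component t_i only if t_i^2 / 2 > tau.
    return np.where(np.abs(t) > np.sqrt(2.0 * tau), t, 0.0)

def nonconvex_proximal_gradient(A, b, lam, threshold_fn, step, n_iter=300):
    # Proximal-gradient loop for min_x 0.5 * ||A x - b||^2 + lam * penalty(x),
    # where the (possibly non-convex) penalty enters only through its
    # componentwise thresholding map threshold_fn.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = threshold_fn(x - step * grad, lam * step)
    return x

# Example usage (step <= 1 / ||A||_2^2 keeps the gradient step safe):
# x_hat = nonconvex_proximal_gradient(A, b, 0.1, hard_threshold,
#                                     step=1.0 / np.linalg.norm(A, 2) ** 2)
```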

Journal

Journal title: SpringerPlus

Year: 2016

ISSN: 2193-1801

DOI: 10.1186/s40064-016-3516-3